AI Guidelines


Diversity and Inclusion in AI for Recruitment: Lessons from Industry Workshop

Bano, Muneera, Zowghi, Didar, Mourao, Fernando, Kaur, Sarah, Zhang, Tao

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) systems for online recruitment markets have the potential to significantly enhance the efficiency and effectiveness of job placements and even promote fairness and inclusive hiring practices. Neglecting Diversity and Inclusion (D&I) in these systems, however, can perpetuate biases, leading to unfair hiring practices and decreased workplace diversity, while exposing organisations to legal and reputational risks. Despite the acknowledged importance of D&I in AI, there is a gap in research on effectively implementing D&I guidelines in real-world recruitment systems. Challenges include a lack of awareness and of frameworks for operationalising D&I in a cost-effective, context-sensitive manner. This study investigates the practical application of D&I guidelines in AI-driven online job-seeking systems, specifically exploring how these principles can be operationalised to create more inclusive recruitment processes. We conducted a co-design workshop with a large multinational recruitment company, focusing on two AI-driven recruitment use cases. User stories and personas were applied to evaluate the impacts of AI on diverse stakeholders. Follow-up interviews were conducted to assess the workshop's long-term effects on participants' awareness and application of D&I principles. The co-design workshop successfully increased participants' understanding of D&I in AI. However, translating awareness into operational practice posed challenges, particularly in balancing D&I with business goals. The results suggest that tailored D&I guidelines and ongoing support are needed to ensure the effective adoption of inclusive AI practices.
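One common way D&I audits of AI recruitment systems are operationalised is an adverse-impact check such as the four-fifths rule. The abstract does not describe the study's own method; the sketch below is a hypothetical illustration with made-up screening numbers.

```python
# Minimal sketch of an adverse-impact (four-fifths rule) check.
# All data and the 0.8 threshold are illustrative, not from the study.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> (selected, screened)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes per demographic group.
outcomes = {"group_a": (40, 100), "group_b": (25, 100)}
ratio = adverse_impact_ratio(outcomes)   # 0.25 / 0.40 = 0.625
flagged = ratio < 0.8                    # four-fifths rule of thumb
```

A ratio below 0.8 does not prove unlawful bias, but it is a widely used screening signal that a hiring pipeline deserves closer review.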


AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities

Wu, Chuhao, Zhang, He, Carroll, John M.

arXiv.org Artificial Intelligence

Generative AI has drawn significant attention from stakeholders in higher education. While it introduces new opportunities for personalized learning and tutoring support, it also poses challenges to academic integrity and raises ethical issues. Consequently, governing responsible AI usage within higher education institutions (HEIs) becomes increasingly important. Leading universities have already published guidelines on Generative AI, with most attempting to embrace this technology responsibly. This study provides a new perspective by focusing on strategies for responsible AI governance as demonstrated in these guidelines. Through a case study of 14 prestigious universities in the United States, we identified the multi-unit governance of AI, the role-specific governance of AI, and the academic characteristics of AI governance from their AI guidelines. The strengths and potential limitations of these strategies and characteristics are discussed. The findings offer practical implications for guiding responsible AI usage in HEIs and beyond.


RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems

Ronanki, Krishna, Cabrero-Daniel, Beatriz, Horkoff, Jennifer, Berger, Christian

arXiv.org Artificial Intelligence

Complying with the EU AI Act (AIA) guidelines while developing and implementing AI systems will soon be mandatory within the EU. However, practitioners lack actionable instructions to operationalise ethics during AI systems development. A literature review of different ethical guidelines revealed inconsistencies in the principles addressed and the terminology used to describe them. Furthermore, requirements engineering (RE), identified as fostering trustworthiness in the AI development process from the early stages, was observed to be absent from many frameworks that support the development of ethical and trustworthy AI. This incongruous phrasing, combined with a lack of concrete development practices, makes trustworthy AI development harder. To address this concern, we formulated a comparison table of the terminology used and the coverage of ethical AI principles in major ethical AI guidelines. We then examined the applicability of ethical AI development frameworks for performing effective RE during the development of trustworthy AI systems. A tertiary review and meta-analysis of literature discussing ethical AI frameworks revealed their limitations when developing trustworthy AI. Based on our findings, we propose recommendations to address such limitations during the development of trustworthy AI.


White House gets seven AI developers to agree to safety, security, trust guidelines

FOX News

The Biden administration announced Friday that seven of the nation's top artificial intelligence developers have agreed to guidelines aimed at ensuring the "safe" deployment of AI. Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI all agreed to the guidelines and will participate in a Friday afternoon event with President Biden to tout the voluntary agreement. "Companies that are developing these emerging technologies have a responsibility to ensure their products are safe," the White House said in a Friday morning statement. "To make the most of AI's potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn't come at the expense of Americans' rights and safety."


The Download: digital dollars, and AI guidelines

MIT Technology Review

In 2020, digital currencies were one of the hottest topics in town. China was well on its way to launching its own central bank digital currency, or CBDC, and many other countries started CBDC research projects. Even Facebook proposed a global digital currency, called Libra. Few eyebrows were raised when the Boston branch of the US Federal Reserve announced a project to research how a CBDC might be technically designed. A hypothetical US central bank digital currency was hardly controversial, after all.


Why You Must Embrace Responsible AI Now

#artificialintelligence

"What we're hearing from our friends and thought leaders in this space that pay close attention to the regulations is just behave as though you're under the European Union's AI Act guidelines, whether you're in Europe, America, or anywhere else," says Roetzer. Regulations like the AI Act will be used as a template by other governments soon. You can't avoid issues around responsible and ethical AI. Regulations will force you to act. Even if you're an AI beginner, you'll quickly run into ethical issues around data, how it's used, and who provides it. You need an AI ethics policy or guidelines.


How the US plans to manage artificial intelligence

#artificialintelligence

US AI guidelines are everything the EU's AI Act is not: voluntary, non-prescriptive and focused on changing the culture of tech companies. As the EU's Artificial Intelligence (AI) Act fights its way through multiple rounds of revisions at the hands of MEPs, in the US a little-known organisation is quietly working up its own guidelines to help channel the development of this promising and yet perilous technology. In March, the Maryland-based National Institute of Standards and Technology (NIST) released a first draft of its AI Risk Management Framework, which sets out a very different vision from the EU's. The work is being led by Elham Tabassi, a computer vision researcher who joined the organisation just over 20 years ago. Back then, "We built [AI] systems just because we could," she said.


Artificial Intelligence and the Insurance Industry

#artificialintelligence

The scope of AI in the insurance industry is expanding, and insurers now use it across a wide range of business streams, from product development to insurance pricing, risk calculations, campaign management and fraud detection. While the benefits of AI are indisputable, questions around its trustworthiness cannot be neglected. Insurers need to adapt their governance structures and control frameworks to ensure their AI systems do not cross acceptable boundaries. Fairness and inclusion are essential for building trust. Regulatory attention, naturally, is shifting to promoting these principles.


Seven Guidelines to Ensure Ethical AI

#artificialintelligence

The organisation of tomorrow will be built around data, and it will require artificial intelligence to make sense of all that data. Artificial intelligence is a broad discipline with the objective of developing intelligent machines. AI consists of several subfields: machine learning (ML), a subset of AI that enables machines to learn from data; reinforcement learning, a subset of ML in which artificial agents use trial and error to improve themselves; and deep learning, also a subset of ML, which aims to mimic the human brain to detect patterns in large datasets and benefit from those patterns.
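The trial-and-error idea behind reinforcement learning can be shown in miniature. The sketch below is purely illustrative (a two-armed bandit with made-up payoff probabilities, not anything from the article): an epsilon-greedy agent refines its reward estimates from experience and gradually favours the better action.

```python
# Illustrative sketch: trial-and-error learning with an epsilon-greedy agent.
# The payoff probabilities are hypothetical; the seed makes the run repeatable.
import random

random.seed(0)
true_payoffs = [0.3, 0.7]   # hidden reward probability of each action
estimates = [0.0, 0.0]      # the agent's learned value estimates
counts = [0, 0]             # how often each action was tried

for step in range(2000):
    # Explore 10% of the time; otherwise exploit the best current estimate.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

best_action = max(range(2), key=lambda a: estimates[a])
```

After enough trials, the agent's estimates track the hidden payoffs and it settles on the higher-paying action, which is exactly the "improve through trial and error" behaviour the passage describes.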

